Opponent-motion mechanisms are self-normalizing
Abstract
In the final stage of the Adelson-Bergen motion energy model [Adelson, E. H., & Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2, 284-299], motion is derived from the difference between the directionally opponent energies E(L) and E(R). However, Georgeson and Scott-Samuel [Georgeson, M. A., & Scott-Samuel, N. E. (1999). Mo...
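As a rough illustration of that opponent stage, the numpy sketch below builds a rightward-drifting grating on a small space-time grid, computes the directional energies E(R) and E(L) from quadrature pairs of space-time Gabor filters, and reports both the raw opponent difference E(R) - E(L) and a flicker-normalized index (E(R) - E(L)) / (E(R) + E(L)). The grid size, envelope widths, and frequencies are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Space-time grid (x in deg, t in s); sizes are illustrative.
x = np.linspace(-1.0, 1.0, 64)
t = np.linspace(0.0, 0.5, 64)
X, T = np.meshgrid(x, t)

sf, tf = 2.0, 4.0                              # spatial (c/deg), temporal (Hz)
stim = np.cos(2 * np.pi * (sf * X - tf * T))   # rightward-drifting grating

def quadrature_pair(direction):
    """Space-time Gabor quadrature pair tuned to one direction
    (+1 rightward, -1 leftward); a stand-in for the Adelson-Bergen
    separable-filter construction."""
    env = np.exp(-X**2 / (2 * 0.3**2) - (T - 0.25)**2 / (2 * 0.1**2))
    phase = 2 * np.pi * (sf * X - direction * tf * T)
    return env * np.cos(phase), env * np.sin(phase)

def energy(stimulus, pair):
    """Motion energy: squared responses of the quadrature pair, summed."""
    even, odd = pair
    return np.sum(stimulus * even)**2 + np.sum(stimulus * odd)**2

E_R = energy(stim, quadrature_pair(+1))
E_L = energy(stim, quadrature_pair(-1))

opponent = E_R - E_L                       # Adelson-Bergen opponent stage
normalized = (E_R - E_L) / (E_R + E_L)     # flicker-normalized index

print(f"E_R={E_R:.3g}  E_L={E_L:.3g}")
print(f"opponent={opponent:.3g}  normalized={normalized:.3f}")
```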
Similar articles

Interocular interactions reveal the opponent structure of motion mechanisms
Interactions between motion sensors tuned to the same and to opposite directions were probed by measuring summation indices for sensitivities (d') to contrast increments and/or decrements applied to drifting gratings presented in binocular and in dichoptic vision. The data confirm a phenomenon described by Stromeyer, Kronauer, Madsen & Klein (1984, J. Opt. Soc. Am. A, 1, 876-884), where...
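As a sketch of the underlying arithmetic, the snippet below computes d' with the standard signal-detection formula d' = z(H) - z(F) and compares a compound-condition sensitivity with the quadratic prediction from two single-cue sensitivities, one common summation benchmark; the hit and false-alarm rates are hypothetical, and the exact index definition used in the study may differ.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Standard signal-detection sensitivity: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates for two single-cue conditions and their compound.
d_a = d_prime(0.80, 0.20)    # e.g. increment alone
d_b = d_prime(0.75, 0.20)    # e.g. decrement alone
d_ab = d_prime(0.93, 0.20)   # both cues together

# Compare the compound sensitivity with the quadratic
# (independent-detector) prediction; an index well above 1
# suggests the cues feed a common mechanism.
quadratic = np.hypot(d_a, d_b)
index = d_ab / quadratic

print(f"d'_a={d_a:.2f}  d'_b={d_b:.2f}  d'_ab={d_ab:.2f}")
print(f"summation index vs quadratic prediction = {index:.2f}")
```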
Mechanisms for Opponent Modelling
In various competitive game contexts, gathering information about one's opponent and relying on it to plan a strategy has been the dominant approach for researchers who deal with what is known in game-theoretic terms as the best-response problem. This approach is known as opponent modelling. The general idea is, given a model of one's adversary, to rely on it for simulating the poss...
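As a toy illustration of that idea, the sketch below estimates an opponent's mixed strategy from observed action counts (in the spirit of fictitious play) and then computes the best response as the action maximizing expected payoff; the payoff matrix and counts are invented for the example.

```python
import numpy as np

# Row player's payoffs A[i, j]: playing action i against opponent
# action j (hypothetical 3x3 game).
A = np.array([[3, 0, 1],
              [1, 2, 0],
              [0, 1, 4]])

# Opponent model: empirical frequencies of the opponent's past actions.
counts = np.array([5, 2, 3])
q = counts / counts.sum()

# Best response to the modelled strategy: maximize expected payoff.
expected = A @ q
best = int(np.argmax(expected))

print(f"expected payoffs = {expected}")
print(f"best response = action {best}")
```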
When and why are log-linear models self-normalizing?
Several techniques have recently been proposed for training “self-normalized” discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expe...
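As a minimal sketch of the technique being analyzed, the snippet below shows a self-normalized objective for a log-linear model: the usual negative log-likelihood plus a penalty that drives log Z(x) toward zero, so the unnormalized score exp(theta_y . x) itself approximates p(y|x). The alpha * (log Z)^2 penalty follows Devlin et al. (2014); the feature sizes and alpha value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_label = 10, 4
theta = rng.normal(scale=0.1, size=(n_label, n_feat))

def self_normalized_loss(x, y, alpha=0.1):
    """Negative log-likelihood of a log-linear model plus a penalty
    pushing log Z(x) toward 0, so exp(theta_y . x) itself
    approximates p(y | x)."""
    s = theta @ x                          # unnormalized log scores
    log_z = np.log(np.sum(np.exp(s)))      # log partition function
    nll = -(s[y] - log_z)                  # ordinary cross-entropy term
    return nll + alpha * log_z**2          # self-normalization penalty

x = rng.normal(size=n_feat)
print(f"loss = {self_normalized_loss(x, y=2):.4f}")
```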
Self-Normalizing Neural Networks
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. We introduce self-normalizin...
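The key ingredient is the SELU activation; the numpy sketch below uses the constants from Klambauer et al. (2017), which are chosen so that zero-mean, unit-variance inputs keep approximately zero mean and unit variance after the activation, the fixed-point property that makes these networks self-normalizing.

```python
import numpy as np

# SELU constants from Klambauer et al. (2017).
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

# Zero-mean, unit-variance inputs stay near (0, 1) after SELU.
z = np.random.default_rng(0).normal(size=100_000)
a = selu(z)
print(f"mean = {a.mean():.3f}, var = {a.var():.3f}")
```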
Journal
Journal title: Vision Research
Year: 2005
ISSN: 0042-6989
DOI: 10.1016/j.visres.2004.10.018